    Ternary Syndrome Decoding with Large Weight

    The Syndrome Decoding problem is at the core of many code-based cryptosystems. In this paper, we study ternary Syndrome Decoding with large weight. This problem was introduced in the Wave signature scheme but has never been thoroughly studied. We perform an algorithmic study of this problem, which results in an update of the Wave parameters. On a more fundamental level, we show that ternary Syndrome Decoding with large weight is a significantly harder problem than binary Syndrome Decoding, which could have several applications for the design of code-based cryptosystems.
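
    For concreteness, here is a minimal sketch of what a ternary syndrome decoding instance looks like, written in NumPy; the sizes n, k, and w below are illustrative placeholders, not the Wave parameters.

        import numpy as np

        def is_sd_solution(H, s, e, w):
            # Ternary syndrome decoding over F_3: e solves the instance (H, s, w)
            # iff H e = s (mod 3) and e has Hamming weight exactly w.
            return np.array_equal(H @ e % 3, s % 3) and np.count_nonzero(e) == w

        # Toy instance: plant a large-weight error and take its syndrome.
        rng = np.random.default_rng(0)
        n, k, w = 12, 4, 9                        # illustrative sizes only
        H = rng.integers(0, 3, size=(n - k, n))   # parity-check matrix over F_3
        e = np.zeros(n, dtype=int)
        support = rng.choice(n, size=w, replace=False)
        e[support] = rng.integers(1, 3, size=w)   # nonzero entries in {1, 2}
        s = H @ e % 3
        assert is_sd_solution(H, s, e, w)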

    Using LDGM Codes and Sparse Syndromes to Achieve Digital Signatures

    In this paper, we address the problem of achieving efficient code-based digital signatures with small public keys. The solution we propose exploits sparse syndromes and randomly designed low-density generator matrix codes. Based on our evaluations, the proposed scheme outperforms existing solutions, achieving considerable security levels with very small public keys.
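
    As a rough illustration of the low-density ingredient, the sketch below builds a random sparse binary matrix with fixed row weight; the sizes and weight are hypothetical placeholders, and real LDGM constructions impose additional structure not shown here.

        import numpy as np

        def random_low_density_matrix(rows, cols, row_weight, rng):
            # Each row has exactly `row_weight` ones, so the matrix stays sparse.
            M = np.zeros((rows, cols), dtype=int)
            for i in range(rows):
                M[i, rng.choice(cols, size=row_weight, replace=False)] = 1
            return M

        G = random_low_density_matrix(rows=8, cols=24, row_weight=3,
                                      rng=np.random.default_rng(0))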

    QC-MDPC decoders with several shades of gray

    QC-MDPC code-based KEMs rely on decoders that have a small or even negligible Decoding Failure Rate (DFR). These decoders should be efficient and implementable in constant time. One example of a QC-MDPC KEM is the Round-2 candidate of the NIST PQC standardization project, BIKE. We have recently shown that the Black-Gray decoder achieves the required properties. In this paper, we define several new variants of the Black-Gray decoder. One of them, called Black-Gray-Flip, needs only 7 steps to achieve a smaller DFR than Black-Gray with 9 steps, for the same block size. On current AVX512 platforms, our BIKE-1 (Level-1) constant-time decapsulation is 1.9x faster than the previous decapsulation with Black-Gray. We also report an additional 1.25x decapsulation speedup using the new AVX512-VBMI2 and vector-PCLMULQDQ instructions available on the Ice Lake microarchitecture.
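
    For context, the sketch below shows one iteration of a plain bit-flipping decoder in NumPy. It is only a schematic: the Black-Gray variants additionally re-evaluate "gray" (borderline) positions, and a real implementation must run in constant time.

        import numpy as np

        def bit_flip_iteration(H, s, e, threshold):
            # Count, for each bit, how many unsatisfied parity checks touch it.
            upc = H.T @ s                         # integer counts, not reduced mod 2
            flips = (upc >= threshold).astype(e.dtype)
            e = (e + flips) % 2                   # flip the suspicious bits
            s = (s + H @ flips) % 2               # update the syndrome to match
            return e, s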

    About Low DFR for QC-MDPC Decoding

    McEliece-like code-based key exchange mechanisms using QC-MDPC codes can reach IND-CPA security under hardness assumptions from coding theory, namely quasi-cyclic syndrome decoding and quasi-cyclic codeword finding. To reach higher security requirements, like IND-CCA security, it is additionally necessary to prove that the decoding failure rate (DFR) is negligible, for some decoding algorithm and a proper choice of parameters. Getting a formal proof of a low DFR is a difficult task. Instead, we propose to ensure this low DFR under an additional security assumption on the decoder. This assumption relates to the asymptotic behavior of the decoder and is supported by several other works. We define a new decoder, Backflip, which features a low DFR. We evaluate the Backflip decoder by simulation and extrapolate its DFR under the decoder security assumption. We also measure the accuracy of our simulation data, in the form of confidence intervals, using standard techniques from communication systems.
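
    The confidence intervals mentioned above can be obtained with standard techniques; one common choice (an assumption here, not necessarily the paper's exact method) is the normal-approximation interval for a failure rate estimated by Monte Carlo simulation.

        import math

        def dfr_confidence_interval(failures, trials, z=1.96):
            # 95% normal-approximation confidence interval for a DFR
            # estimated from `trials` decoding simulations.
            p = failures / trials
            half = z * math.sqrt(p * (1 - p) / trials)
            return max(0.0, p - half), min(1.0, p + half)

        print(dfr_confidence_interval(failures=3, trials=10**6))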

    Fast polynomial inversion for post quantum QC-MDPC cryptography

    The NIST PQC standardization project evaluates multiple new designs for post-quantum Key Encapsulation Mechanisms (KEMs). Some of them present challenging tradeoffs between communication bandwidth and computational overheads. An interesting case is the set of QC-MDPC based KEMs. Here, schemes that use the Niederreiter framework require only half the communication bandwidth of schemes that use the McEliece framework. However, this requires a costly polynomial inversion during key generation, which is prohibitive when ephemeral keys are used. One example is BIKE, where the BIKE-1 variant uses McEliece and the BIKE-2 variant uses Niederreiter. This paper shows an optimized constant-time polynomial inversion method that makes the computational cost of BIKE-2 key generation tolerable. We report a speedup of 11.8x over the commonly used NTL library, and 55.5x over OpenSSL. We achieve additional speedups by leveraging the latest Intel Vector-PCLMULQDQ instructions on a laptop machine: 14.3x over NTL and 96.8x over OpenSSL. With this, BIKE-2 becomes a competitive variant of BIKE.
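
    To show what is being computed, though not how the paper computes it, here is a sketch of inversion in F_2[x]/(x^r - 1) via the extended Euclidean algorithm, with polynomials packed into Python integers (bit i holds the coefficient of x^i). Note that this version is variable-time; the paper's contribution is a constant-time method.

        def gf2_mul(a, b):
            # Carry-less product of two F_2[x] polynomials.
            r = 0
            while b:
                if b & 1:
                    r ^= a
                b >>= 1
                a <<= 1
            return r

        def gf2_divmod(a, b):
            # Quotient and remainder of a by b in F_2[x].
            q, db = 0, b.bit_length()
            while a.bit_length() >= db:
                shift = a.bit_length() - db
                q ^= 1 << shift
                a ^= b << shift
            return q, a

        def gf2_inverse(h, r):
            # Invert h modulo x^r - 1 (which equals x^r + 1 over F_2).
            mod = (1 << r) | 1
            old_r, rem = mod, h
            old_t, t = 0, 1
            while rem:
                q, new_rem = gf2_divmod(old_r, rem)
                old_r, rem = rem, new_rem
                old_t, t = t, old_t ^ gf2_mul(q, t)
            if old_r != 1:
                raise ValueError("h is not invertible modulo x^r - 1")
            return old_t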

    LEDAkem: a post-quantum key encapsulation mechanism based on QC-LDPC codes

    This work presents a new code-based key encapsulation mechanism (KEM) called LEDAkem. It is built on the Niederreiter cryptosystem and relies on quasi-cyclic low-density parity-check codes as secret codes, providing high decoding speeds and compact keypairs. LEDAkem uses ephemeral keys to foil known statistical attacks, and takes advantage of a new decoding algorithm that provides faster decoding than the classical bit-flipping decoder commonly adopted in this kind of system. The main attacks against LEDAkem are investigated, taking into account quantum speedups. Some instances of LEDAkem are designed to achieve different security levels against classical and quantum computers. Some performance figures obtained through an efficient C99 implementation of LEDAkem are provided.
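
    As an aside on why quasi-cyclic codes give compact keypairs: one row determines an entire circulant block, and multiplying by the block is just polynomial multiplication modulo x^p - 1. A small self-check (illustrative sizes, not LEDAkem parameters):

        import numpy as np

        def circulant(row):
            # p x p circulant matrix whose i-th row is `row` shifted cyclically by i.
            return np.array([np.roll(row, i) for i in range(len(row))])

        rng = np.random.default_rng(0)
        p = 11                                    # illustrative block size only
        a = rng.integers(0, 2, p)                 # first row <-> polynomial a(x)
        v = rng.integers(0, 2, p)
        mat_result = v @ circulant(a) % 2         # vector-by-circulant product
        full = np.convolve(v, a)                  # plain polynomial product
        folded = full[:p].copy()
        folded[: p - 1] += full[p:]               # reduce modulo x^p - 1
        assert np.array_equal(mat_result, folded % 2)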

    LESS is More: Code-Based Signatures without Syndromes

    Devising efficient and secure signature schemes based on coding theory is still considered a challenge by the cryptographic community. In this paper, we construct a signature scheme by exploring a new approach to the area. To do this, we design a zero-knowledge identification scheme, which we then render non-interactive via standard means (e.g., the Fiat-Shamir transform). We show that practical instances of our protocol have the potential to outperform the state of the art on code-based signatures, achieving small data sizes with low computational complexity.
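
    The Fiat-Shamir step replaces the verifier's random challenge with a hash of the prover's commitment and the message; a generic sketch of that derivation (not the LESS protocol itself) looks like this:

        import hashlib

        def fiat_shamir_challenge(commitment: bytes, message: bytes, n_bits: int) -> int:
            # Deriving the challenge deterministically removes the verifier,
            # turning the identification scheme into a signature scheme.
            digest = hashlib.sha256(commitment + message).digest()
            return int.from_bytes(digest, "big") >> (256 - n_bits)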

    Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes

    We present here a new family of trapdoor one-way Preimage Sampleable Functions (PSF) based on codes, the Wave-PSF family. The trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized (U,U+V)-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the leftover hash lemma. We instantiate the new Wave-PSF family with ternary generalized (U,U+V)-codes to design a "hash-and-sign" signature scheme which achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are on the order of 15 thousand bits, the public key size is on the order of 4 megabytes, and the rejection rate is limited to one rejection every 10 to 12 signatures.
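
    The rejection sampling mentioned above follows the generic pattern sketched below (a schematic, not Wave's actual sampler): proposals are redrawn with a probability chosen so that accepted outputs follow the target distribution, preventing signatures from leaking information about the trapdoor.

        import random

        def rejection_sample(propose, accept_prob):
            # Draw from the proposal distribution until a draw is accepted;
            # the accepted output then follows the target distribution.
            while True:
                x = propose()
                if random.random() < accept_prob(x):
                    return x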

    The Hardness of Code Equivalence over F_q and its Application to Code-based Cryptography

    The code equivalence problem is to decide whether two linear codes over F_q are equivalent, that is, identical up to a linear isometry of the Hamming space. In this paper, we review the hardness of code equivalence over F_q in light of some recent negative results and consider the possible implications for code-based cryptography. In particular, we present an improved version of the three-pass identification scheme of Girault and discuss a connection between code equivalence and the hidden subgroup problem.
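
    For reference, the linear isometries of the Hamming space over F_q are the monomial transformations: permute the coordinates and scale each by a nonzero scalar. A small sketch of applying one to a generator matrix (illustrative code, not the paper's algorithms):

        import numpy as np

        def monomial_transform(G, perm, scalars, q):
            # Two codes are equivalent when some column permutation combined
            # with nonzero column scalings maps a generator matrix of one
            # onto a generator matrix of the other (up to row operations).
            assert all(s % q != 0 for s in scalars)
            return (G[:, perm] * np.asarray(scalars)) % q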

    LEDAcrypt: QC-LDPC Code-Based Cryptosystems with Bounded Decryption Failure Rate

    We consider the QC-LDPC code-based cryptosystems named LEDAcrypt, which are under consideration by NIST for the second round of the post-quantum cryptography standardization initiative. LEDAcrypt is the result of the merger of the key encapsulation mechanism LEDAkem and the public-key cryptosystem LEDApkc, which were submitted to the first round of the same competition. We provide a detailed quantification of the quantum and classical computational efforts needed to foil the cryptographic guarantees of these systems. To this end, we take into account the best known attacks that can be mounted against them employing both classical and quantum computers, and compare their computational complexities with the ones required to break AES, consistently with the NIST requirements. Taking the original LEDAkem and LEDApkc parameters as a reference, we introduce an algorithmic optimization procedure to design new sets of parameters for LEDAcrypt. These novel sets match the security levels in the NIST call and make the C reference implementation of the systems exhibit significantly improved figures of merit, in terms of both running times and key sizes. As a further contribution, we develop a theoretical characterization of the decryption failure rate (DFR) of LEDAcrypt cryptosystems, which allows new instances of the systems with guaranteed low DFR to be designed. Such a characterization is crucial to withstand recent attacks exploiting the reactions of the legitimate recipient upon decrypting multiple ciphertexts with the same private key, and consequently it ensures a lifecycle of the corresponding key pairs sufficient for the wide majority of practical purposes.